
    Near-optimal mean estimators with respect to general norms

    We study the problem of estimating the mean of a random vector in $\mathbb{R}^d$ based on an i.i.d. sample, when the accuracy of the estimator is measured by a general norm on $\mathbb{R}^d$. We construct an estimator (that depends on the norm) that achieves an essentially optimal accuracy/confidence tradeoff under the only assumption that the random vector has a well-defined covariance matrix. The estimator is based on the construction of a uniform median-of-means estimator in a class of real-valued functions that may be of independent interest.
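    A minimal sketch, in Python with NumPy, of the scalar median-of-means idea behind the estimator described above; the block count k and the equal-size splitting scheme are illustrative choices, not the construction from the paper.

        import numpy as np

        def median_of_means(x, k):
            """Median-of-means estimate of E[X] from a one-dimensional sample x using k blocks."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            # Drop the remainder so that every block has the same size n // k.
            blocks = x[: k * (n // k)].reshape(k, -1)
            return float(np.median(blocks.mean(axis=1)))

        # Example: a heavy-tailed sample where the plain empirical mean is unstable.
        rng = np.random.default_rng(0)
        sample = rng.standard_t(df=2.5, size=10_000)
        print(median_of_means(sample, k=20))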

    Strategies for sequential prediction of stationary time series

    We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
    Keywords: sequential prediction, ergodic process, individual sequence, Gaussian process
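    A minimal sketch of one way such a combination can work: an exponentially weighted average of simple predictors under squared loss, written in Python with NumPy. The learning rate eta and the two example experts are illustrative assumptions, not the predictors analyzed in the paper.

        import numpy as np

        def sequential_predict(y, experts, eta=1.0):
            """Predict y[t] online from expert forecasts; reweight experts by their squared errors."""
            weights = np.ones(len(experts))
            preds = []
            for t in range(len(y)):
                expert_preds = np.array([e(y[:t]) for e in experts])
                preds.append(float(np.dot(weights / weights.sum(), expert_preds)))
                # Exponential weighting: experts with smaller squared error gain weight.
                weights *= np.exp(-eta * (expert_preds - y[t]) ** 2)
            return np.array(preds)

        # Example experts: the last observed value and the running mean (0 before any data arrives).
        experts = [
            lambda past: past[-1] if len(past) else 0.0,
            lambda past: past.mean() if len(past) else 0.0,
        ]
        rng = np.random.default_rng(1)
        y = 0.1 * np.cumsum(rng.normal(size=200))
        print(sequential_predict(y, experts)[-5:])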

    Concentration inequalities

    The laws of large numbers of classical probability theory state that sums of independent random variables are, under very mild conditions, close to their expected value with large probability. Such sums are the most basic examples of random variables concentrated around their mean. More recent results reveal that such a behavior is shared by a large class of general functions of independent random variables. Such results go generally under the name of “concentration inequalities.” The purpose of this article is to offer an introduction to some of these inequalities.
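    As a concrete instance of the kind of bound the article introduces, Hoeffding's inequality (a standard textbook result, not quoted from the article itself) controls the deviation of a sum of bounded independent random variables $X_i \in [a_i, b_i]$:

        \[
          \mathbb{P}\left( \Bigl| \sum_{i=1}^{n} \bigl( X_i - \mathbb{E}[X_i] \bigr) \Bigr| \ge t \right)
          \le 2 \exp\left( - \frac{2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right), \qquad t > 0.
        \]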

    Discussion of ``2004 IMS Medallion Lecture: Local Rademacher complexities and oracle inequalities in risk minimization'' by V. Koltchinskii

    Discussion of ``2004 IMS Medallion Lecture: Local Rademacher complexities and oracle inequalities in risk minimization'' by V. Koltchinskii [arXiv:0708.0083]. Published at http://dx.doi.org/10.1214/009053606000001046 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Detecting a Path of Correlations in a Network

    We consider the problem of detecting an anomaly in the form of a path of correlations hidden in white noise. We provide a minimax lower bound and a test that, under mild assumptions, is able to achieve the lower bound up to a multiplicative constant. (arXiv admin note: text overlap with arXiv:1504.0698)

    Bandits with heavy tail

    The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order $1+\epsilon$, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds showing that the best achievable regret deteriorates when $\epsilon < 1$.
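    A minimal sketch of a UCB-style strategy built on the truncated empirical mean, one of the robust estimators named above, written in Python with NumPy. The truncation threshold and the exploration bonus below are illustrative assumptions; the paper's actual confidence widths depend on $\epsilon$ and the moment bound.

        import numpy as np

        def truncated_mean(rewards, threshold):
            """Empirical mean after zeroing out observations whose magnitude exceeds the threshold."""
            r = np.asarray(rewards, dtype=float)
            return float(np.mean(np.where(np.abs(r) <= threshold, r, 0.0)))

        def robust_bandit(arms, horizon, rng):
            """Pull each arm once, then repeatedly pull the arm with the largest robust index."""
            rewards = [[arm(rng)] for arm in arms]
            for t in range(len(arms), horizon):
                indices = []
                for obs in rewards:
                    n = len(obs)
                    threshold = np.sqrt(n)                    # illustrative: grows with the sample size
                    bonus = np.sqrt(2.0 * np.log(t + 1) / n)  # illustrative exploration bonus
                    indices.append(truncated_mean(obs, threshold) + bonus)
                choice = int(np.argmax(indices))
                rewards[choice].append(arms[choice](rng))
            return rewards

        # Example: two heavy-tailed arms (Student-t rewards, the second shifted by 0.5).
        rng = np.random.default_rng(2)
        arms = [lambda g: g.standard_t(df=2.5), lambda g: g.standard_t(df=2.5) + 0.5]
        history = robust_bandit(arms, horizon=2000, rng=rng)
        print([len(h) for h in history])  # pull counts; the better arm should dominate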